perm filename REPORT[E76,JMC] blob sn#227662 filedate 1976-07-27 generic text, type C, neo UTF8
∂25-JUL-76  2304	BG   
John, this is the revised version of the verification stuff.

We have verified in FOL the correctness of the McCarthy-Painter compiler,
which translates additive arithmetic expressions into a simple
assembly language.  This is the beginning of a project which we hope will
lead both to new FOL features and to new techniques in program
verification.  There are three reasons why this project is beginning
with relatively simple correctness proofs.
 1.  Correctness proofs for complex programs are simply too difficult
     at present.  In addition to conceptual difficulties, carrying out
     a substantial correctness proof would be unbearably long in the
     present version of FOL.
 2.  The simpler proofs already constitute adequate test cases
     for new FOL features (which will help make more complex
     proofs feasible).
 3.  The difficulty of proving correctness of, e.g., a FORTRAN
     or COBOL program is in part due to the intrinsic structure
     of these languages; their design did not take ease of
     verification into consideration.  In fact, we hope that
     ease of verification will be a design criterion for the next
     generation of computer languages.
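The compiler in question is small enough to sketch concretely.  The
following Python fragment is an illustration in the spirit of the
McCarthy-Painter compiler (the function names and the tiny accumulator
instruction set are invented here; the actual correctness proof is a
formal FOL derivation, not an executable test):

```python
# Sketch of a McCarthy-Painter-style compiler: additive arithmetic
# expressions -> a simple single-accumulator assembly language.
# Expressions: int constants, str variables, or ('+', e1, e2).

def compile_expr(e, t=0):
    """Compile expression e; t is the first free temporary register."""
    if isinstance(e, int):
        return [('LI', e)]        # load immediate into the accumulator
    if isinstance(e, str):
        return [('LOAD', e)]      # load a variable into the accumulator
    op, e1, e2 = e
    assert op == '+'
    return (compile_expr(e1, t) + [('STO', t)] +       # save left operand
            compile_expr(e2, t + 1) + [('ADD', t)])    # add it back in

def run(code, env):
    """Interpret compiled code; return the final accumulator value."""
    acc, regs = 0, {}
    for op, arg in code:
        if op == 'LI':     acc = arg
        elif op == 'LOAD': acc = env[arg]
        elif op == 'STO':  regs[arg] = acc
        elif op == 'ADD':  acc = acc + regs[arg]
    return acc

def eval_expr(e, env):
    """Direct semantics of the source language."""
    if isinstance(e, int): return e
    if isinstance(e, str): return env[e]
    _, e1, e2 = e
    return eval_expr(e1, env) + eval_expr(e2, env)

# The correctness statement proved in FOL is, roughly:
#   for all e and env:  run(compile_expr(e), env) == eval_expr(e, env)
```

The FOL proof establishes this equation for all expressions by
structural induction, rather than checking it on particular runs.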
     
Let us now delineate some of our specific goals.  First, we are
writing new FOL code to implement "syntactic simplification," a proof
procedure which amounts to simplifying a statement as much as
possible modulo judiciously selected definitions and assertions.  We
expect that this code will shorten many of our proofs, in particular
proofs of program correctness, by an order of magnitude.  This code
should be completed by October 1976.
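The flavor of syntactic simplification can be suggested by a toy term
rewriter: a term is reduced as far as possible using a chosen set of
definitions.  (This Python sketch is only an analogy; the rule set and
representation are invented, and the real FOL code works on logical
formulas, not Lisp-style terms.)

```python
# Toy "syntactic simplification": rewrite a term to a normal form
# modulo a small set of selected definitions.  Pattern variables are
# strings beginning with '?'.

RULES = [
    (('car', ('cons', '?x', '?y')), '?x'),
    (('cdr', ('cons', '?x', '?y')), '?y'),
    (('+', '?x', 0), '?x'),
]

def match(pat, term, env):
    """Return an extended binding environment, or None on failure."""
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in env:
            return env if env[pat] == term else None
        env = dict(env)
        env[pat] = term
        return env
    if (isinstance(pat, tuple) and isinstance(term, tuple)
            and len(pat) == len(term)):
        for p, t in zip(pat, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pat == term else None

def subst(pat, env):
    """Instantiate a pattern with the bindings in env."""
    if isinstance(pat, str) and pat.startswith('?'):
        return env[pat]
    if isinstance(pat, tuple):
        return tuple(subst(p, env) for p in pat)
    return pat

def simplify(term):
    """Simplify subterms first, then apply any matching rule."""
    if isinstance(term, tuple):
        term = tuple(simplify(t) for t in term)
    for lhs, rhs in RULES:
        env = match(lhs, term, {})
        if env is not None:
            return simplify(subst(rhs, env))
    return term
```

Each rule application here shortens the term, so the rewriting
terminates; choosing definitions with that property is part of
"judicious selection."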

Second, the correctness proof will be extended to more complex compilers,
specifically compilers for extensions of the McCarthy-Painter language
and for subsets of LISP.  These extensions actually present new
conceptual problems; that is, new FOL techniques will be required
to carry out these proofs.  These proofs should be complete by the
fall of 1977.

Third, we will formalize proofs of properties of programs other than
correctness.  Because the ability to treat programs as objects and to
discuss their properties in a formal language is not available in
program verifiers, we want to see what results can be obtained in
FOL.  As an example, one problem of this kind is to compare the
efficiency of different programs which implement a single sorting
algorithm.  This work will require more sophisticated use of existing
FOL features (e.g., axiom schemata); it may also suggest new proof
techniques and consequently new code.  The first results in this area
should appear early in 1977.

Fourth, we expect that trying to prove properties of real programs
will require new code to describe familiar mathematical systems
efficiently.  For example, a routine which decides the truth of
statements about real numbers would be a useful tool for proving
the correctness of a program for solving a set of simultaneous
linear equations.  

We intend to continue our work in representation theory by studying in 
detail the relation between knowledge and action.  Most AI work to this
point has ignored the fact that acting effectively requires knowledge.
Typical systems assume that an actor has all the knowledge that is 
relevant to achieving his goals.  In the real world this is not the
case. Actors must be aware of when they lack knowledge, and they must
have means available to them for acquiring knowledge.

We are designing a system for reasoning about commonsense plans which
require knowledge and involve the acquisition of knowledge.  We want to
consider whether an actor can achieve a certain goal by carrying out
a particular plan.  For example, an actor can carry out a conditional
plan (if A is true do B, otherwise do C) only if he knows whether the
conditional is true, and he knows how to carry out the correct branch
of the conditional.
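The executability condition for such plans can be sketched as a simple
recursive check.  (All names below are invented for illustration; since
this checker does not model the actual truth value of the conditional's
test, it conservatively requires that the actor know how to carry out
both branches.)

```python
# Can an actor carry out a plan, given what he knows?
# A plan is ('do', action), ('seq', p1, p2, ...), or
# ('if', P, then_plan, else_plan).
# knows_whether: propositions whose truth value the actor knows.
# can_do: primitive actions the actor knows how to perform.

def can_execute(plan, knows_whether, can_do):
    kind = plan[0]
    if kind == 'do':                      # primitive action
        return plan[1] in can_do
    if kind == 'seq':                     # sequence of sub-plans
        return all(can_execute(p, knows_whether, can_do)
                   for p in plan[1:])
    if kind == 'if':                      # conditional plan
        _, prop, then_p, else_p = plan
        # The actor must know WHETHER prop holds; lacking a model of
        # prop's actual truth value, we require ability on both branches.
        return (prop in knows_whether and
                can_execute(then_p, knows_whether, can_do) and
                can_execute(else_p, knows_whether, can_do))
    return False
```

On the example above: the plan (if A do B else C) is executable only by
an actor who knows whether A and can do both B and C.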

We feel that one important application of work on representation theory,
particularly representing facts about knowledge, could be qualitative
simulation for policy evaluation and planning.  Computers could have a
much broader impact on policy formulation if they were able to adequately
represent qualitative aspects of problem situations.  Existing methods,
such as Forrester's system dynamics, tempt one to ignore aspects of 
reality which cannot be represented as aggregate quantities or rates of
flow.  A mathematical model for energy policy could express oil supply
as a function of price, capital investment, etc., but it would be very
difficult to represent the political factors that might lead to an
Arab oil embargo.  Representing reality by sets of facts to be reasoned
with seems to be a much more powerful approach.

A system based on the kind of models we envision could have a data-base
containing a general description of the policy domain and general policy
goals. It could then consider specific problems and suggested policy
responses, together with the policy maker's assessment of the likely
results of the policies.  The system's goal would be to point out
unanticipated impacts of policies on overall policy goals.

It is clear that in such a system reasoning about knowledge would be
important.  Policy goals often include the acquisition, dissemination,
or withholding of knowledge.  It is also clear that the techniques
for reasoning about knowledge will have to be very general.  Simply
listing known and unknown facts will not do.  We will need to say
things like Actor A knows everything that Actor B knows about topic
X, or that at time T we will have sufficient knowledge to achieve
goal G, even though we cannot presently specify what that knowledge is.

We propose to begin investigating this area by looking at serious policy
games.  We expect to determine if the state of the art will permit
formalization of the knowledge required to play these games by Nov. 1, 1976.
If the situation appears promising, we hope to have a design for a prototype
system for policy analysis in the game context by June 1, 1977.



∂26-JUL-76  1044	ZM   

1. Intermittent assertions in program verification.
 
The "intermittent-assertion method"
is a new verification technique, first suggested by Burstall[1]
and recently advocated by Manna and Waldinger[2].
In this method, comments are affixed
to points in the program with the intent that at some time
control will pass through the point with the values of the variables
satisfying the relationship expressed in the assertion.  Thus
control may pass the point many times without satisfying the assertion,
but at least once it will pass with the assertion satisfied; we
therefore call these comments "intermittent assertions".
This method differs from the classical "invariant-assertion approach"
(Floyd[3]) in which the comments, called "invariant assertions",
are understood to hold every time control passes the corresponding point.
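The distinction can be seen on an invented toy example: at a loop-top
control point, an invariant assertion must hold at every visit, while an
intermittent assertion need only hold at some visit -- in particular,
the visit that witnesses total correctness.

```python
# Invented example: compute n*n by summing the first n odd numbers,
# recording every state seen at the loop-top control point.
# Invariant assertion at that point:    s == i*i   (every visit)
# Intermittent assertion at that point: i == n and s == n*n (some visit)

def square_by_odd_sums(n):
    states = []
    i, s = 0, 0
    while i < n:
        states.append((i, s))   # control passes the point
        s += 2 * i + 1
        i += 1
    states.append((i, s))       # final pass, loop exit
    return s, states

s, states = square_by_odd_sums(5)
assert all(sv == iv * iv for iv, sv in states)         # invariant
assert any(iv == 5 and sv == 25 for iv, sv in states)  # intermittent
```

Here the intermittent assertion, once proved to be eventually satisfied,
yields termination and the output specification at one stroke.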

Intermittent assertions have several theoretical advantages over
invariant assertions: (a) they establish the program's "total"
correctness, i.e., both that the program satisfies its specifications
upon termination and that the program indeed terminates; (b) the method
has proved to be more powerful than any other existing method; (c) it is
particularly useful in proving the validity of program transformations
and properties of continuously operating programs.

During the forthcoming year, we plan to investigate the more practical
aspects of the method and to begin work towards the automation of
this method of verification.

2. Automatic annotation in program verification.

None of the existing implementations of the invariant-assertion method
of program verification are fully automatic: the user must supply the
assertions himself.  This deficiency has of course been recognized and
there has been substantial recent effort to automate the process of 
invariant-assertion generation (e.g., German and Wegbreit[4]).

In Katz and Manna[5] we suggested rules for generating invariant assertions
from the assignments and tests of the program,
and distinguished between two general approaches: (a) the "algorithmic"
rules, by which relations between variables are directly derived 
from the program statements in such a manner as to guarantee that 
they are indeed invariant assertions;  (b) the "heuristic" rules, 
by which promising candidates for invariant assertions are suggested, 
though they are not guaranteed to be invariants.
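One algorithmic rule of this kind (in the spirit of, though not quoted
from, Katz and Manna[5], with invented names) is: if a loop body performs
x := x + a and y := y + b for constants a and b, then b*x - a*y keeps
its initial value, so the equation below is guaranteed to be an
invariant.

```python
# Algorithmic generation of a linear invariant: for a loop body
#   x := x + a;  y := y + b     (a, b constants)
# the quantity b*x - a*y is unchanged by the body, so
#   b*x - a*y == b*x0 - a*y0
# holds at every pass of the loop top -- no heuristics needed.

def linear_invariant(x0, a, y0, b):
    """Return a predicate asserting the derived invariant."""
    c = b * x0 - a * y0
    return lambda x, y: b * x - a * y == c

# Check the derived invariant against an actual run of such a loop.
inv = linear_invariant(x0=0, a=3, y0=10, b=2)
x, y = 0, 10
for _ in range(7):
    assert inv(x, y)            # holds at every pass of the loop top
    x, y = x + 3, y + 2
assert inv(x, y)
```

Heuristic rules, by contrast, would propose candidates (say, bounds on
x suggested by the loop test) that must then be verified or discarded.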

During the next year, we plan to modify and extend these techniques.
On that basis, we plan to implement a system that will automatically
generate invariant assertions, with capabilities exceeding existing
implementations.

3. A generalized theory of fixedpoints.

A recursive definition can be considered as an implicit functional
equation with many possible solutions, called "fixedpoints".  While
the prevalent approach concentrates on one specific solution --
the "least fixedpoint" -- we plan to investigate the behavior of the
set of solutions as a whole.  The purpose of this study is to
gain a better understanding of the relationship between recursive
definitions and their fixedpoints.  Properties of specific fixedpoints,
e.g., the least and maximal fixedpoints, may be viewed within the framework of
this generalized theory.

For example, in Manna and Shamir[6] we define a new type of fixedpoint,
the "optimal fixedpoint", which contains the maximal amount of "interesting"
information which can be extracted from the recursive definition.
We have also extended and generalized Kleene's[7] classical
result that the totally undefined function converges to the least
fixedpoint under the "direct" access method.  We have shown that under
a composition of the "direct" and "descending" access methods, any
initial function converges to a "close" fixedpoint, though no
single access method enjoys this property.
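Kleene's result itself is easy to exhibit on a finite domain.  The
Python sketch below uses an invented recursive definition with many
fixedpoints; iterating the functional from the totally undefined
function converges to the least one.  (The access-method results above
concern more refined modes of evaluation and are not reproduced here.)

```python
# Invented recursive definition over x in {0,...,5}:
#   f(x) = 1     if x == 0
#        = f(x)  otherwise
# Every partial function with f(0) = 1 is a fixedpoint; the least
# fixedpoint is defined only at 0.  Partial functions are dicts,
# with an absent key meaning "undefined".

N = 5
DOMAIN = range(N + 1)

def functional(f):
    """One application of the functional defined by the recursion."""
    g = {}
    for x in DOMAIN:
        if x == 0:
            g[x] = 1
        elif x in f:            # f(x) = f(x): copy whatever is defined
            g[x] = f[x]
    return g

# Kleene iteration: start totally undefined, apply until stable.
f = {}
while functional(f) != f:
    f = functional(f)
assert f == {0: 1}              # the least fixedpoint

# Other fixedpoints exist, e.g. a total constant extension; singling
# out maximal or "optimal" ones is the subject of the theory above.
g = {x: (1 if x == 0 else 7) for x in DOMAIN}
assert functional(g) == g
```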





__________________

1. Burstall, R.M. [Aug. 1974], "Program proving as hand simulation with a
little induction", Proc. IFIP, Stockholm.

2. Manna, Z. and R.J. Waldinger [June 1976], "Is SOMETIME sometimes
better than ALWAYS? Intermittent assertions in proving program
correctness", Memo AIM-281, Artificial Intelligence Laboratory,
Stanford University.

3. Floyd,  R.W. [1967], "Assigning  meanings to  programs", Proc. 
Symp. in Applied  Mathematics, V. 19,  (J.T. Schwartz,  ed.),
American Mathematical Society, Providence, R.I., pp. 19-32.

4. German, S.M., and B. Wegbreit [Mar. 1975], "A synthesizer of inductive 
assertions", IEEE Trans. on Software Engineering, V. SE-1, No. 1, pp. 68-75.

5. Katz, S.M. and Z. Manna [Apr. 1976], "Logical analysis of programs", CACM,
V. 19, No. 4, pp. 188-206.

6. Manna, Z. and A. Shamir [Sept. 1976], "The theoretical aspects of the
optimal fixedpoint", SIAM J. of Computing.

7. Kleene, S.C. [1952], "Introduction to meta-mathematics",
D. Van Nostrand, Princeton, N.J.